A recent report by ExtraHop analyzes the security concerns surrounding the use of generative artificial intelligence (AI). According to the report, 73% of IT and security leaders admit their employees use generative AI tools or large language models (LLMs) sometimes or frequently at work, yet they are unsure how to appropriately address the associated security risks.
The report found that IT and security leaders are more concerned about receiving inaccurate or nonsensical responses (40%) than about security-centric issues such as exposure of customer and employee personally identifiable information (PII) (36%), exposure of trade secrets (33%) and financial loss (25%).
Thirty-two percent of respondents said their organization has banned the use of generative AI tools, a proportion similar to the share who are very confident in their ability to protect against AI threats (36%). Despite these bans, only 5% say employees never use the tools at work, signaling that the bans are largely ineffective.
Ninety percent of respondents want the government involved in some way, with 60% favoring mandatory regulations and 30% supporting government standards that businesses can adopt at their own discretion.
Eighty-two percent are very or somewhat confident their current security stack can protect against threats from generative AI tools. However, fewer than half have invested in technology that helps their organization monitor the use of generative AI, only 46% have policies in place governing acceptable use, and just 42% train users on the safe use of these tools.
Read the full report here.